A concept for Inferring ‘Frontier Research’ in Research Project Proposals
Abstract
This paper discusses a concept for inferring attributes of 'frontier research' in peer-reviewed research project proposals under the European Research Council (ERC) scheme. The concept serves two purposes: 1) to conceptualize and define, automatically extract, and comparatively assess attributes of frontier research in proposals; and 2) to build a statistical model and compare its outcomes with the review decisions in order to obtain further insight into, and reflect upon, the influence of frontier research in the peer-review process. To this end, indicators (including scientific 'novelty', 'risk', or 'interdisciplinarity') are elaborated across scientific disciplines and in accord with the Council's strategic definition of frontier research, exploiting textual proposal information and other bibliometric data of applicants. Subsequently, a concept is discussed to measure ex post the influence of these indicators on the probability (or, alternatively, the odds) of a proposal being accepted. The final analysis of the classification and decision probabilities compares and contrasts review decisions in order to, e.g., statistically explain congruence between frontier research and review decisions or reveal differential representation of attributes. Ultimately, the concept aims at a methodology that monitors the effectiveness and efficiency of peer-review processes.

Background and Objectives

Scientific disciplines use peer review as an essential mechanism for resource allocation and quality control (Bornmann, 2010). It serves either to determine what research deserves to be published (ex-post review) or what proposed research deserves to be funded through nationally or regionally operating agencies (ex-ante review). Reviewers face the challenge of finding out what is the "best" scientific research according to a journal's or agency's strategy. Typically, journal and grant schemes are sufficiently different that there are no universal "best practices".

Peer-review systems are rooted in critical rationalism and are widely accepted and actively supported by the scientific community. Currently they are considered the most effective quality control instrument at hand. Still, they are not free from criticism on a number of issues, including poor reliability (low congruence among reviewer opinions); fairness (opinions biased by non-scientific considerations); predictive validity (decisions in accord with merit to the scientific community); and inefficiency (e.g., delay, resources spent, opportunity cost). Several issues are 'systemic' in that biases and quality filtering are not necessarily incommensurable: making scientific progress is fundamentally a conservative act, building incrementally upon previous work and requiring evidence and a balance of risk and reward before radical papers (ideas) are published (funded); and the social dynamics of subjective human interaction give rise to lower reliability, which may be offset by higher predictive validity (Bornmann, 2010; Powell, 2010). Because of its central role not only for the scientific community but also for publishers/editors and funding agencies, monitoring peer-review processes is essential to shed light on the extent to which set goals are actually accomplished by review decisions. The need for monitoring the effects, and the implicit reorientation, of peer-review processes is the subject of current research activities (Hojat et al. 2003; Sweitzer & Cullen 1994; Bornmann & Daniel 2008; Marsh et al. 2008).
Where the scope and specialization of science has become too complex and sophisticated to be coped with by any one reviewer, scientometric methods offer a 'helping hand' to either support the decision-making process or evaluate its outcome. In fact, scientometric evaluation has been attracting significant attention amid the rising need to get a grip on science output and efficiency (Besselaar & Leydesdorff 2009; Norden 2010). On the one hand, these methods carry strengths in that they are precisely defined and reliable, objective, efficient, and need no intervention; on the other hand, their weaknesses come mainly in terms of limits of interpretation, applicability, confounding factors, and predictive validity, all of which are more or less debatable. Table 1 lists selected examples of scientometric indicators that are performance-centred and use (in principle) easy-to-measure volume data. Other indicators incorporate data extracted from networks of co-authorships, citations or bibliographic coupling; curricula vitae, e.g., age, gender, institution (teaching, research); referee scores; or socioeconomic relevance (previously attracted or requested funds).

Table 1. Selected classical scientometric indicators for measuring scientific performance.

Indicator | Measurement | Interpretation options
Authorship | Number of publications or co-publications | Research output, productivity
Citation | Number of citations | Research influence, international impact
Publication-citation | h-index | Research productivity and impact
Journal impact factor | Total number of citations received by a journal in year n for papers published in years n-1 and n-2, divided by the number of articles published in n-1 and n-2 | Reputation of scientific journals
Online access | Number of times a paper is accessed online in some time period T | Global spread, attention in the scientific community and beyond
Network properties | Social network parameters | Beyond stand-alone measures, reflect system properties and influence in the network: interconnectedness and speed of information exchange

Naturally, a numerical model cannot be expected to substitute for the expert peer-review process and the merit judgements of the scientific community, each of which is hard to quantify. More faithfully, models of this kind may serve to verify decisions, deliver supporting data efficiently, or hint at biases in the review process (Juznic et al. 2010). The nature and objectives of journal reviews (co-authorship, selection, improving the quality of published research) differ from reviews of proposed research projects (individual investigator, allocation of resources to inherently risky, speculative projects). This is particularly evident in different expectations on predictive validity and, consequently, the choice of indicators tailored to the underlying strategy, mission and policy of funding bodies to establish interpretable and useful cause-effect relationships. Meanwhile, discrepancies between the selection of the set of "best" proposals derived from review systems and scientometrically predicted identifications are to be expected. Because measures, numbers, and comparisons among peers can deliver a powerful message and impose normative behaviour (Ariel 2010), a number of studies have shed more light on the underlying reasons (e.g., Besselaar & Leydesdorff 2009; Bornmann, Leydesdorff & Besselaar 2009; Juznic et al. 2010).
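As a minimal illustration of two Table 1 indicators, the following sketch (not from the paper; the function names and figures are illustrative) computes the h-index and the journal impact factor exactly as defined above:

```python
# Minimal sketch of two Table 1 indicators; input data are made up.

def h_index(citations):
    """h-index: the largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def journal_impact_factor(citations_in_year_n, articles_prev_two_years):
    """JIF for year n: citations received in year n to items published in
    years n-1 and n-2, divided by the number of articles published in
    n-1 and n-2 (the Table 1 definition)."""
    return citations_in_year_n / articles_prev_two_years

print(h_index([10, 8, 5, 4, 3]))        # -> 4 (four papers with >= 4 citations)
print(journal_impact_factor(600, 200))  # -> 3.0
```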
This paper looks at bibliometric evaluation of research project proposals:

• From a grant point of view, it focuses on proposals submitted to the prestigious European Research Council (ERC) in the scientific domains "Physics & Engineering" (PE) and "Life Sciences" (LS); PE holds ten main categories and ~170 subcategories, LS nine and ~100. Scientists from all over the world who intend to work with a host institution based in an EU Member State or associated country can compete for two different types of grants: Starting Grants (SGs) for investigators with 2-12 years of experience after their PhD, at the stage of starting or consolidating their independent research team; and Advanced Grants (AGs) for already established investigators with at least 10 years of experience and significant research achievements. Grants are to support pioneering, far-reaching research endeavours that combine high-risk/high-impact potential, break established disciplinary boundaries, or explore new productive lines of scientific enquiry, methodology or techniques. Each project can receive up to 2 m€ (SGs) or 3.5 m€ (AGs) for a maximum of 5 years (cf. Table 2).

• From a methodological point of view, it complements the standard approach to scientific excellence, which is classically based on quantities of papers, citations, etc. (Bornmann, Leydesdorff, & Besselaar 2009), and takes into account textual features related to the content and quality of 'frontier research' (EC 2005) detectable in individual research proposals, this being the sole criterion for awarding ERC grants to young or senior investigators (ERC 2008). Due to the single evaluation criterion (scientific excellence), ERC grants provide a suitable test-bed for content analysis/text-mining and modelling in the field of bibliometric evaluation (Yoon, Lee & Lee 2010). The primary interest is the extent to which research proposals comply with attributes of frontier research and the influence of these attributes on the selection of awarded grants.

Table 2. Number of proposals submitted and grants awarded by the ERC in 2007–2009.

ERC grant (year) | Total budget (m€) | Proposals submitted | Grants awarded | Grants awarded in PE and LS
SG (2007) | 335 | 9,167 | 299 | 242
AG (2008) | 553 | 2,167 | 282 | 198
SG (2009) | 325 | 2,503 | 244 | 187
AG (2009) | 515 | 1,584 | 244 | 202
Source: ERC (2011); since 2009, SG and AG grants are awarded annually.

The remainder of the paper is structured as follows. After introducing the ERC peer-review system and the definition of frontier research, an outline of the concept of the review process is presented. Subsequently, scientometric and text-analytic methods (Roche et al. 2010; Schiebel et al. 2010) capturing the desired attributes of frontier research are laid out. Then, a discrete choice model is adopted to approximate the selection function and to quantify how the indicators influence the probability of a proposal being accepted (a sketch follows below). Finally, a discussion of the concept closes the paper by elaborating a comparison of peer-review process and model outputs and the assessment of the "influencing power" of the individual indicators.
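The paper does not fix the functional form at this point; as a hedged sketch, one standard discrete choice specification consistent with the stated probability/odds framing is the binary logit, in which indicator scores of a proposal drive the acceptance probability:

```latex
% Hedged sketch of a binary logit specification; the indicator set and
% coefficients are assumptions for illustration, not the paper's model.
\[
P(y_i = 1 \mid \mathbf{x}_i)
  = \frac{\exp\!\bigl(\beta_0 + \sum_{k=1}^{K} \beta_k x_{ik}\bigr)}
         {1 + \exp\!\bigl(\beta_0 + \sum_{k=1}^{K} \beta_k x_{ik}\bigr)},
\qquad
\frac{P(y_i = 1 \mid \mathbf{x}_i)}{1 - P(y_i = 1 \mid \mathbf{x}_i)}
  = \exp\!\Bigl(\beta_0 + \sum_{k=1}^{K} \beta_k x_{ik}\Bigr)
\]
```

Here y_i = 1 denotes acceptance of proposal i, x_ik is the score of indicator k (e.g., novelty, risk), and exp(beta_k) gives the multiplicative change in the odds of acceptance per unit increase in indicator k, which is the "influencing power" that an ex-post comparison with review decisions would examine.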
The third domain, "Social Sciences & Humanities" (SH), is not considered, as it is expected to differ in terms of publishing, citation behaviour, and other features from those observed in PE and LS (e.g., national/regional orientation, fewer publications in the form of articles, a different theoretical 'development rate', number of authors, non-scholarly publications), which makes it less amenable to approaches developed for the natural and life sciences (Nederhof 2006; Juznic et al. 2010).

A concept based on joint scientometric and content analysis

Peer review process set up by the ERC

As the first European research funding body of its kind, the ERC targets research at the highest level of excellence in any scientific discipline. It supports investigator-driven projects aiming at broadening scientific and technological knowledge without regard for established disciplinary boundaries (frontier research) through open and direct competition. The selection of proposals is based strictly on peer review. The ERC has established a process intended to identify the scientific excellence of frontier research as the sole evaluation criterion for funding decisions (ERC 2010). Internationally renowned scientists and scholars constitute two sets of review panels (for the SGs and AGs), each of which is subdivided into 25 individual panels that cover the entire range of disciplines and fall into the domains PE, LS, and SH. Each panel is composed of 10-12 members and headed by a chair. If further expertise is required, external reviewers may be consulted to provide assessments on a proposal-by-proposal basis.

Table 3. Relation between the definition of frontier research and the corresponding indicators.

NOVELTY: Frontier research "... stands at the forefront of creating new knowledge and developing new understanding. Those involved are responsible for fundamental discoveries and advances in theoretical and empirical understanding, and even achieving the occasional revolutionary breakthrough that completely changes our knowledge of the world."

RISK: Frontier research "... is an intrinsically risky endeavour. In the new and most exciting research areas, the approach or trajectory that may prove most fruitful for developing the field is often not clear. Researchers must be bold and take risks. Indeed, only researchers are generally in a position to identify the opportunities of greatest promise. The task of funding agencies is confined to supporting the best researchers with the most exciting ideas, rather than trying to identify priorities."

PASTEURESQUENESS: Frontier research "... may well be concerned with both new knowledge about the world and with generating potentially useful knowledge at the same time. Therefore, there is a much closer and more intimate connection between the resulting science and technology, with few of the barriers that arise when basic research and applied research are carried out separately."

INTERDISCIPLINARITY: Frontier research "... pursues questions irrespective of established disciplinary boundaries. It may well involve multi-, inter- or trans-disciplinary research that brings together researchers from different disciplinary backgrounds, with different theoretical and conceptual approaches, techniques, methodologies and instrumentation, perhaps even different goals and motivations."
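How such attributes might be extracted from proposal texts is left open at this stage. As a purely hypothetical sketch (the corpus, vocabulary and scoring below are illustrative assumptions, not the paper's method), a text-based novelty indicator could be computed as the distance between a proposal and the prior literature of its panel:

```python
# Hypothetical novelty indicator: 1 minus the maximum TF-IDF cosine
# similarity between a proposal and prior texts of its discipline.
# Corpus and example texts are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def novelty_score(proposal, prior_corpus):
    """A proposal that is lexically far from everything already written
    in its discipline scores close to 1; a derivative one close to 0."""
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(prior_corpus + [proposal])
    similarities = cosine_similarity(tfidf[-1], tfidf[:-1])
    return 1.0 - float(similarities.max())

prior = [
    "established density functional methods for battery electrode materials",
    "incremental improvements to lattice gauge simulations",
]
print(novelty_score("topological qubit architectures for fault tolerance", prior))
```

Analogous distance- or diversity-based scores, e.g., the spread of a proposal's vocabulary across disciplinary term lists, could serve as crude proxies for interdisciplinarity; the point of the concept is that each Table 3 attribute is given some reproducible textual operationalisation whose influence on the acceptance probability can then be estimated.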